Lightweight deep learning algorithm for weld seam surface quality detection of traction seat
Zijie HUANG, Yang OU, Degang JIANG, Cailing GUO, Bailin LI
Journal of Computer Applications    2024, 44 (3): 983-988.   DOI: 10.11772/j.issn.1001-9081.2023030349

To address the low accuracy and speed of manual and traditional automated inspection of the weld seam surface of the traction seat, a lightweight weld seam quality detection algorithm, YOLOv5s-G2CW, was proposed. Firstly, the GhostBottleneckV2 module was applied as a replacement for the C3 module in YOLOv5s to reduce the number of model parameters. Then, the CBAM (Convolutional Block Attention Module) was introduced into the neck of YOLOv5s to fuse weld features along two dimensions: channel and space. Also, the localization loss function of YOLOv5s was replaced with Wise-IoU, focusing the predictive regression on ordinary-quality anchor boxes. Finally, the 13 × 13 feature layer used for detecting large objects in YOLOv5s was removed to further reduce the number of parameters. Experimental results show that, compared with YOLOv5s, the YOLOv5s-G2CW model reduces model size by 53.9%, increases frames per second by 8.0%, and improves the mAP (mean Average Precision) by 0.8 percentage points, indicating that the model meets the requirements for real-time, accurate detection of the traction seat weld seam surface.
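As an illustration of the localization-loss family the abstract refers to, the sketch below computes plain IoU between axis-aligned boxes; Wise-IoU additionally weights this term with a dynamic focusing factor, which is omitted here (illustrative only, not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target):
    """Basic IoU regression loss; Wise-IoU multiplies this by a focusing weight."""
    return 1.0 - iou(pred, target)
```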

Answer selection model based on pooling and feature combination enhanced BERT
Jie HU, Xiaoxi CHEN, Yan ZHANG
Journal of Computer Applications    2023, 43 (2): 365-373.   DOI: 10.11772/j.issn.1001-9081.2021122167

Current mainstream models cannot fully express the semantics of question-answer pairs and do not fully consider the relationships between their topic information, and the commonly used activation function suffers from soft saturation, all of which affect overall model performance. To solve these problems, an answer selection model based on pooling and feature combination enhanced BERT (Bi-directional Encoder Representations from Transformers) was proposed. Firstly, adversarial samples and a pooling operation were introduced to represent the semantics of question-answer pairs on top of the pre-trained BERT model. Secondly, the relationships between the topic information of question-answer pairs were strengthened by feature combination of the topic information. Finally, the activation function in the hidden layer was improved, and the concatenated vector was passed through the hidden layer and classifier to complete the answer selection task. Validation was performed on the SemEval-2016CQA and SemEval-2017CQA datasets. The results show that, compared with the tBERT model, the proposed model improves accuracy by 3.1 and 2.2 percentage points respectively, and F1 score by 2.0 and 3.1 percentage points respectively, demonstrating that the proposed model effectively improves overall performance on the answer selection task and outperforms the comparison model in both accuracy and F1 score.
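The pooling step can be pictured as a minimal mean-plus-max pooling over token vectors; the function name and plain-list representation are illustrative assumptions, not the authors' implementation:

```python
def mean_max_pool(token_vecs):
    """Concatenate mean-pooled and max-pooled features over a token sequence,
    giving a fixed-size semantic summary regardless of sequence length."""
    dim = len(token_vecs[0])
    mean = [sum(v[d] for v in token_vecs) / len(token_vecs) for d in range(dim)]
    mx = [max(v[d] for v in token_vecs) for d in range(dim)]
    return mean + mx
```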

Chinese named entity recognition based on knowledge base entity enhanced BERT model
Jie HU, Yan HU, Mengchi LIU, Yan ZHANG
Journal of Computer Applications    2022, 42 (9): 2680-2685.   DOI: 10.11772/j.issn.1001-9081.2021071209

Aiming at the problem that the pre-trained model BERT (Bidirectional Encoder Representation from Transformers) lacks lexicon information, a Chinese named entity recognition model, OpenKG + Entity Enhanced BERT + CRF (Conditional Random Field), was proposed on the basis of the semi-supervised entity-enhanced minimum mean-square-error pre-training model. Firstly, documents were downloaded from the Chinese general encyclopedia knowledge base CN-DBPedia and entities were extracted with the Jieba Chinese word segmenter to expand the entity dictionary. Then, the entities in the dictionary were embedded into BERT for pre-training, and the word vectors obtained from training were fed into a Bidirectional Long Short-Term Memory network (BiLSTM) for feature extraction. Finally, the results were corrected by the CRF layer and output. Validation was performed on the CLUENER 2020 and MSRA datasets, comparing the proposed model with the Entity Enhanced BERT pre-training, BERT + BiLSTM, ERNIE and BiLSTM + CRF models. Experimental results show that, compared with these four models, the proposed model improves F1 score on the two datasets by 1.63 and 1.10, 3.93 and 5.35, 2.42 and 4.63, and 6.79 and 7.55 percentage points respectively, demonstrating that the proposed model effectively improves named entity recognition and outperforms all comparison models in F1 score.
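The CRF correction step amounts to Viterbi decoding over tag scores; the sketch below is a generic, simplified version (dictionary-based scores, no batching), not the authors' code:

```python
def viterbi(emissions, transitions, tags):
    """Most likely tag sequence given per-token emission scores and tag-transition
    scores. emissions: list of {tag: score}; transitions: {(prev, cur): score}."""
    prev = {t: emissions[0][t] for t in tags}
    back = []
    for em in emissions[1:]:
        cur, ptr = {}, {}
        for t in tags:
            # best predecessor for tag t at this position
            best_p = max(tags, key=lambda p: prev[p] + transitions.get((p, t), 0.0))
            cur[t] = prev[best_p] + transitions.get((best_p, t), 0.0) + em[t]
            ptr[t] = best_p
        prev, back = cur, back + [ptr]
    last = max(tags, key=lambda t: prev[t])
    path = [last]
    for ptr in reversed(back):        # follow back-pointers to recover the path
        path.append(ptr[path[-1]])
    return list(reversed(path))
```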

Voting instance selection algorithm based on learning to hash
Yajie HUANG, Junhai ZHAI, Xiang ZHOU, Yan LI
Journal of Computer Applications    2022, 42 (2): 389-394.   DOI: 10.11772/j.issn.1001-9081.2021071188

With the massive growth of data, how to store and use data has become a hot issue in academic research and industrial applications. As one way of addressing these problems, instance selection effectively reduces the difficulty of follow-up work by selecting representative instances from the original data according to established rules. Therefore, a voting instance selection algorithm based on learning to hash was proposed. Firstly, Principal Component Analysis (PCA) was used to map high-dimensional data to a low-dimensional space. Secondly, the k-means algorithm was combined with vector quantization to perform iterative operations, and the hash codes of the cluster centers were used to represent the data. The grouped data were then randomly sampled according to a given proportion, and the final instances were selected by voting over several independent runs of the algorithm. Compared with the Condensed Nearest Neighbor (CNN) algorithm and the linear-complexity big-data instance selection algorithm LSH-IS-F (Instance Selection algorithm by Hashing with two passes), the proposed algorithm improves the compression ratio by an average of 19%. The idea of the proposed algorithm is simple and easy to implement, and the compression ratio can be controlled automatically by adjusting the parameters. Experimental results on 7 datasets show that the proposed algorithm has a great advantage over random hashing in compression ratio and running time while achieving similar test accuracy.
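A toy end-to-end sketch of the cluster-then-vote idea, with 1-D points and plain k-means standing in for the paper's PCA + vector-quantization pipeline (all names and parameter values are illustrative):

```python
import random
from collections import Counter

def kmeans(points, k, iters=20):
    """Plain k-means on 1-D points; returns a cluster index per point."""
    centers = random.sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(p - centers[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

def select_instances(points, k=2, ratio=0.7, runs=5, min_votes=3):
    """Cluster the data several times, sample a fraction of each cluster per run,
    and keep the instances selected in at least `min_votes` independent runs."""
    votes = Counter()
    for _ in range(runs):
        labels = kmeans(points, k)
        for c in set(labels):
            idx = [i for i, l in enumerate(labels) if l == c]
            chosen = random.sample(idx, max(1, int(len(idx) * ratio)))
            votes.update(chosen)
    return sorted(i for i, v in votes.items() if v >= min_votes)
```

Raising `ratio` or lowering `min_votes` keeps more instances, which is the parameter-controlled compression the abstract describes.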

Three-stage question answering model based on BERT
Yu PENG, Xiaoyu LI, Shijie HU, Xiaolei LIU, Weizhong QIAN
Journal of Computer Applications    2022, 42 (1): 64-70.   DOI: 10.11772/j.issn.1001-9081.2021020335

The development of pre-trained language models has greatly promoted progress on machine reading comprehension tasks. In order to make full use of the shallow features of the pre-trained language model and further improve the answer prediction accuracy of question answering models, a three-stage question answering model based on Bidirectional Encoder Representation from Transformers (BERT) was proposed. Firstly, three stages of pre-answering, re-answering and answer-adjusting were designed on top of BERT. Secondly, the inputs of the BERT embedding layer were treated as shallow features to pre-generate an answer in the pre-answering stage. Then, the deep features fully encoded by BERT were used to re-generate another answer in the re-answering stage. Finally, the final prediction was generated by combining the two previous answers in the answer-adjusting stage. Experimental results on the English span-extraction dataset Stanford Question Answering Dataset 2.0 (SQuAD2.0) and the Chinese dataset Chinese Machine Reading Comprehension 2018 (CMRC2018) show that the Exact Match (EM) and F1 score (F1) of the proposed model are improved by 1 to 3 percentage points on average over comparable baseline models, and that the model extracts more accurate answer spans. By combining shallow and deep features of BERT, the three-stage model extends the abstract representation ability of BERT, explores the application of BERT's shallow features in question answering models, and has the advantages of simple structure, accurate prediction, and fast training and inference.
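The answer-adjusting stage can be pictured as a weighted vote between the shallow and deep span predictions; the rule and the weight `alpha` below are illustrative assumptions, not the authors' exact combination:

```python
def combine_spans(shallow, deep, alpha=0.3):
    """Combine a shallow (embedding-layer) prediction with a deep (fully encoded)
    prediction; each argument is ((start, end), score). If the two stages agree,
    the span is kept; otherwise the weighted higher-scoring span wins."""
    (s_span, s_score), (d_span, d_score) = shallow, deep
    if s_span == d_span:
        return s_span
    return s_span if alpha * s_score > (1 - alpha) * d_score else d_span
```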

Secure identity-based proxy signcryption scheme in standard model
MING Yang, FENG Jie, HU Qijun
Journal of Computer Applications    2014, 34 (10): 2834-2839.  

Concerning the security of proxy signcryption in practice, and motivated by Gu's proxy signature scheme (GU K, JIA W J, JIANG C L. Efficient identity-based proxy signature in the standard model. The Computer Journal, 2013: bxt132), a new secure identity-based proxy signcryption scheme in the standard model was proposed. Proxy signcryption allows the original signcrypter to delegate his authority of signcryption to a proxy signcrypter, so that the latter can generate ciphertext on behalf of the former. By combining the functionalities of identity-based signcryption and proxy signature schemes, the new scheme not only has the advantages of an identity-based signcryption scheme, but also provides the function of a proxy signature scheme. Analysis shows that, under the Diffie-Hellman assumption, the proposed scheme is confidential and unforgeable. Compared with known schemes, the scheme requires only 2 pairing computations in proxy key generation and 1 pairing computation in proxy signcryption, and therefore has higher computational efficiency.

Three dimensional localization algorithm for wireless sensor networks based on projection and grid scan
TANG Jie, HUANG Hongguang
Journal of Computer Applications    2013, 33 (09): 2470-2473.   DOI: 10.11772/j.issn.1001-9081.2013.09.2470
A method was proposed to address the shortcomings of current three-dimensional localization algorithms for Wireless Sensor Networks (WSN) in terms of accuracy and complexity. A grid scan was used to resolve the overlap region of the neighboring anchor nodes' projections on two coordinate planes and to obtain the corresponding positions of the unknown nodes on those planes, from which the three-dimensional positions of the unknown nodes were finally estimated. The simulation results show that when 200 sensor nodes were deployed randomly in a space of 100 m × 100 m × 100 m, the coverage ratio of unknown nodes reached 99.1% and the relative error decreased to 0.5533. The use of projection efficiently reduces the complexity of the algorithm.
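A heavily simplified sketch of the projection-plus-grid-scan idea: a 3-D range constraint projects onto a coordinate plane as a disc (the in-plane distance never exceeds the full 3-D distance), and grid scans over two planes yield the position estimate. This toy version intersects all anchors' discs and averages the surviving grid cells, unlike the paper's neighbor-pair overlap resolution:

```python
import math

def grid_scan_2d(anchors_2d, ranges, lo, hi, step=1.0):
    """Centroid of grid points lying inside every anchor's projected disc."""
    hits = []
    x = lo
    while x <= hi:
        y = lo
        while y <= hi:
            if all(math.hypot(x - ax, y - ay) <= r
                   for (ax, ay), r in zip(anchors_2d, ranges)):
                hits.append((x, y))
            y += step
        x += step
    if not hits:
        return None
    n = len(hits)
    return (sum(p[0] for p in hits) / n, sum(p[1] for p in hits) / n)

def locate_3d(anchors, ranges, lo=0.0, hi=100.0, step=1.0):
    """Estimate (x, y, z) from two independent 2-D scans on the xy- and xz-planes."""
    xy = grid_scan_2d([(a[0], a[1]) for a in anchors], ranges, lo, hi, step)
    xz = grid_scan_2d([(a[0], a[2]) for a in anchors], ranges, lo, hi, step)
    if xy is None or xz is None:
        return None
    return ((xy[0] + xz[0]) / 2, xy[1], xz[1])
```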
Research status and development trend of human computation
YANG Jie, HUANG Xiaopeng, SHENG Yin
Journal of Computer Applications    2013, 33 (07): 1875-1879.   DOI: 10.11772/j.issn.1001-9081.2013.07.1875
Human computation is a technology that combines human abilities with distributed computing theory to solve problems that computers cannot solve alone. The concept of human computation and its properties were introduced, and the distinctions between human computation and several similar concepts were clarified. Based on a literature review, the current research methods and design criteria of human computation were sorted out. Finally, research directions and development trends of human computation were discussed.
Application of high-precision clock-synchronous method in hydrophone's linear array
CHEN Jin, DUAN Fajie, JIANG Jiajia, CHANG Zongjie, HUA Xiangning, LI Yanchao
Journal of Computer Applications    2013, 33 (02): 600-602.   DOI: 10.3724/SP.J.1087.2013.00600
A precise full-array-synchronization data acquisition clock generation and transmission method was proposed to handle the data acquisition synchronization problem of hydrophone arrays in ocean underwater acoustic detection. By using an independent high-precision clock source and asynchronous differential transmission lines, long-distance synchronous acquisition across the hydrophone array was realized with high anti-jamming capability. A detailed model of the full-array-synchronization principle was analyzed, a prototype system was established, and circuit experiments were carried out to verify the feasibility of the entire system. The time delay of the recovered clock at the acquisition node was less than 165 ns when the low-speed synchronous clock signal was transmitted over an 18 m unshielded twisted pair. The experimental results show that the proposed method works well for distributing the standard data acquisition clock in a linear array.
Application of particle filter algorithm in traveling salesman problem
WU Xin-jie, HUANG Guo-xing
Journal of Computer Applications    2012, 32 (08): 2219-2222.   DOI: 10.3724/SP.J.1087.2012.02219
Existing optimization algorithms for solving the Traveling Salesman Problem (TSP) easily fall into local extrema. To overcome this shortcoming, a new optimization method based on the particle filter, which regards the search for the best TSP route as a dynamic time-varying system, was put forward. The basic idea of using the particle filter principle to search for the best TSP route was expounded, and its implementation procedure was given. To reduce the possibility of sinking into local extrema, the crossover and mutation operators of the Genetic Algorithm (GA) were introduced into the new algorithm to enhance the diversity of particles during sampling. Finally, simulation experiments were performed to verify the validity of the new method. The new particle-filter-based optimization method can find better solutions than other optimization algorithms.
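A minimal sketch of the particle-filter search with a GA-style mutation, run on a toy distance matrix (the parameter choices and the 2-swap mutation are illustrative, not the authors' exact operators):

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def pf_tsp(dist, n_particles=30, n_iters=50, seed=1):
    """Particle-filter-style search: tours are particles, shorter tours get
    higher weights, resampling concentrates on good tours, and a random
    2-swap (a GA-style mutation) keeps the particle set diverse."""
    rng = random.Random(seed)
    n = len(dist)
    particles = [rng.sample(range(n), n) for _ in range(n_particles)]
    best = min(particles, key=lambda t: tour_length(t, dist))
    for _ in range(n_iters):
        weights = [math.exp(-tour_length(t, dist)) for t in particles]
        total = sum(weights)
        probs = [w / total for w in weights]
        # resample (copies) according to the importance weights
        particles = [list(rng.choices(particles, probs)[0]) for _ in range(n_particles)]
        for t in particles:                 # mutation: swap two cities
            i, j = rng.randrange(n), rng.randrange(n)
            t[i], t[j] = t[j], t[i]
        cand = min(particles, key=lambda t: tour_length(t, dist))
        if tour_length(cand, dist) < tour_length(best, dist):
            best = cand
    return best, tour_length(best, dist)
```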
Optimal algorithm for FIR digital filter with canonical signed digit coefficients
TAN Jiajie, HUANG Sanwei, ZOU Changqin
Journal of Computer Applications    2011, 31 (06): 1727-1729.   DOI: 10.3724/SP.J.1087.2011.01727
In order to save the resources of the Finite Impulse Response (FIR) filter and increase its running speed, it was proposed to use the Least Mean-Square-Error (LMSE) criterion to convert the floating-point coefficient filter into a Canonical Signed Digit (CSD) coefficient filter. The FIR filter was implemented in a cascade structure that grouped conjugate zero pairs into two basic sections. First, all zeros of the digital filter were calculated and grouped into the two cascade sections of the FIR filter. Then the coefficients of the first cascade section were converted to fixed point, and next the coefficients of the second section were quantized to fixed point; to eliminate finite word-length effects, the LMSE criterion was adopted to compensate the zeros in this step. Finally, all fixed-point coefficients were quantized into CSD form. To demonstrate effectiveness, an FIR filter with simply quantized coefficients was also designed for comparison. The magnitude responses of the two methods show that the LMSE quantization is more effective than simple quantization.
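The final quantization step maps fixed-point coefficients to canonical signed-digit form, i.e. digits in {-1, 0, 1} with no two adjacent non-zeros, which minimizes the number of adders per coefficient in hardware. A generic encoder (least-significant digit first), not tied to the paper's filter design:

```python
def to_csd(n):
    """Canonical signed-digit (non-adjacent form) of an integer:
    digits in {-1, 0, 1}, least significant first, no two adjacent non-zeros."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 if n ≡ 1 (mod 4), -1 if n ≡ 3 (mod 4)
        else:
            d = 0
        digits.append(d)
        n = (n - d) // 2
    return digits

def from_csd(digits):
    """Reconstruct the integer from its CSD digit list."""
    return sum(d * (1 << i) for i, d in enumerate(digits))
```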
Output code algorithm for hierarchical error correcting based on KNNModel
Yi-yi XIN, Gong-de GUO, Li-fei CHEN, Jie HUANG
Journal of Computer Applications    2009, 29 (11): 3051-3055.  
Error Correcting Output Codes (ECOC) is an effective approach to multi-class problems; however, ECOC coding operates only at the class level and the ECOC matrix is pre-designed. A novel classification algorithm based on hierarchical ECOC was proposed. In the training phase, the algorithm first used KNNModel to build multiple clusters on a given dataset and chose a few clusters for each class as representatives to construct a hierarchical coding matrix, which was then used to train each individual classifier. In the testing phase, the proposed method made the most of the merits of KNNModel and ECOC through model combination. Experimental results on UCI datasets show the effectiveness of the proposed method.
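Decoding in any ECOC scheme assigns the class whose codeword is nearest, in Hamming distance, to the concatenated outputs of the column classifiers; a generic sketch (the binary matrix and names are illustrative, not the paper's hierarchical matrix):

```python
def ecoc_decode(outputs, code_matrix):
    """Return the class whose codeword has the smallest Hamming distance to the
    binary outputs of the column classifiers. code_matrix: {class: codeword}."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(code_matrix, key=lambda cls: hamming(outputs, code_matrix[cls]))
```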
Novel improvement based on stable path for MAODV protocol
Jie HU, Bin CHEN, Xiang-nan MA, Xiao-jing HE
Journal of Computer Applications    2009, 29 (11): 2904-2907.  
The multicast tree in MAODV is reconstructed frequently because of node mobility, which significantly increases routing cost and transmission delay. A stable path selection method based on the neighbor change ratio was proposed to overcome these shortcomings, together with a new way of calculating the neighbor change ratio that does not require Hello messages to be sent periodically. On this basis, a Stable Path based MAODV (SP-MAODV) multicast routing protocol was given, which selects a stable path with fewer hops. Simulation results for packet delivery rate, routing overhead, average end-to-end delay and delay jitter show that the new protocol reduces the probability of path interruption and improves protocol performance.
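The neighbor change ratio can be read as the fraction of a node's neighborhood that changed between two observations; the formula below and the max-over-path selection rule are a plausible reading of the abstract, not the paper's exact definitions:

```python
def neighbor_change_ratio(prev_neighbors, cur_neighbors):
    """Fraction of a node's neighborhood that joined or left between two
    observations; low values indicate a stable node."""
    union = prev_neighbors | cur_neighbors
    if not union:
        return 0.0
    changed = prev_neighbors ^ cur_neighbors   # symmetric difference
    return len(changed) / len(union)

def stablest_path(paths, ratios):
    """Among candidate paths (lists of node ids), prefer the one whose worst
    per-node change ratio is smallest; break ties by hop count."""
    return min(paths, key=lambda p: (max(ratios[n] for n in p), len(p)))
```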